The purpose of these benchmarks is to compare several integrators for use in molecular dynamics simulation. We will use a simulation of liquid argon from the examples of NBodySimulator as the test case.
```julia
using ProgressLogging
using NBodySimulator, OrdinaryDiffEq, StaticArrays
using Plots, DataFrames, StatsPlots

function setup(t)
    T = 120.0     # K
    kb = 1.38e-23 # J/K
    ϵ = T * kb    # J
    σ = 3.4e-10   # m
    ρ = 1374      # kg/m^3
    m = 39.95 * 1.6747 * 1e-27 # kg
    N = 216
    L = (m*N/ρ)^(1/3)
    R = 2.25σ
    v_dev = sqrt(kb * T / m) # m/s

    _L = L / σ
    _σ = 1.0
    _ϵ = 1.0
    _m = 1.0
    _v = v_dev / sqrt(ϵ / m)
    _R = R / σ

    bodies = generate_bodies_in_cell_nodes(N, _m, _v, _L)

    lj_parameters = LennardJonesParameters(_ϵ, _σ, _R)
    pbc = CubicPeriodicBoundaryConditions(_L)
    lj_system = PotentialNBodySystem(bodies, Dict(:lennard_jones => lj_parameters));
    simulation = NBodySimulation(lj_system, (0.0, t), pbc, _ϵ/T)

    return simulation
end
```
```
setup (generic function with 1 method)
```
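As a sanity check on the reduced units, the box side computed in `setup` can be verified by hand from the physical parameters above (all numbers are taken from the listing; only the intermediate names are illustrative):

```julia
# Worked check of the reduced box size computed in `setup`.
m = 39.95 * 1.6747 * 1e-27  # argon atom mass, kg
N = 216                     # number of particles
ρ = 1374                    # density, kg/m^3
σ = 3.4e-10                 # Lennard-Jones length scale, m

L = (m * N / ρ)^(1/3)       # box side ≈ 2.19e-9 m
_L = L / σ                  # ≈ 6.4 box lengths in reduced (σ) units
```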
In order to compare different integration methods, we will consider a fixed simulation time and vary the timestep (or the tolerances, in the case of adaptive methods).
```julia
function benchmark(energyerr, rts, bytes, allocs, nt, nf, t, configs)
    simulation = setup(t)
    prob = SecondOrderODEProblem(simulation)
    for config in configs
        alg = config.alg
        sol, rt, b, gc, memalloc = @timed solve(prob, alg();
            save_everystep=false, progress=true, progress_name="$alg", config...)
        result = NBodySimulator.SimulationResult(sol, simulation)
        ΔE = total_energy(result, t) - total_energy(result, 0)
        energyerr[alg] = ΔE
        rts[alg] = rt
        bytes[alg] = b
        allocs[alg] = memalloc
        nt[alg] = sol.destats.naccept
        nf[alg] = sol.destats.nf + sol.destats.nf2
    end
end

function run_benchmark!(results, t, integrators, tol...; c=ones(length(integrators)))
    @progress "Benchmark at t=$t" for τ in zip(tol...)
        runtime = Dict()
        ΔE = Dict()
        nt = Dict()
        nf = Dict()
        b = Dict()
        allocs = Dict()
        cfg = config(integrators, c, τ...)

        GC.gc()
        benchmark(ΔE, runtime, b, allocs, nt, nf, t, cfg)
        get_tol(idx) = haskey(cfg[idx], :dt) ? cfg[idx].dt : (cfg[idx].abstol, cfg[idx].rtol)

        for (idx, i) in enumerate(integrators)
            push!(results, [string(i), runtime[i], get_tol(idx)...,
                abs(ΔE[i]), nt[i], nf[i], c[idx]])
        end
    end
    return results
end
```
```
run_benchmark! (generic function with 1 method)
```
We will consider symplectic integrators first.
```julia
symplectic_integrators = [
    VelocityVerlet,
    #VerletLeapfrog,
    PseudoVerletLeapfrog,
    McAte2,
    #CalvoSanz4,
    #McAte5,
    Yoshida6,
    #KahanLi8,
    #SofSpa10
]
```
```
4-element Array{DataType,1}:
 VelocityVerlet
 PseudoVerletLeapfrog
 McAte2
 Yoshida6
```
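For reference, the first method in the list, velocity Verlet, advances positions and velocities as in the following minimal sketch. This is the textbook scheme, not NBodySimulator's internal implementation, and `accel` is a hypothetical acceleration function `a(x)`:

```julia
# One velocity Verlet step of size dt: a second-order symplectic update
# that reuses the new acceleration for the velocity half-step.
function verlet_step(x, v, accel, dt)
    a = accel(x)
    x_new = x + v * dt + 0.5 * a * dt^2       # full position update
    a_new = accel(x_new)
    v_new = v + 0.5 * (a + a_new) * dt        # velocity from averaged accelerations
    return x_new, v_new
end
```

Being symplectic, the scheme keeps the energy error bounded over long simulations, which is why we track the energy error rather than a trajectory error in the benchmarks below.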
Let us run a short simulation to see the cost per timestep for each method.
```julia
config(integrators, c, τ) = [ (alg=a, dt=τ*cₐ) for (a,cₐ) in zip(integrators, c)]

t = 35.0
τs = 1e-3

# warmup
c_symplectic = ones(length(symplectic_integrators))
benchmark(Dict(), Dict(), Dict(), Dict(), Dict(), Dict(), 10.,
    config(symplectic_integrators, c_symplectic, τs))

results = DataFrame(:integrator=>String[], :runtime=>Float64[], :τ=>Float64[],
    :EnergyError=>Float64[], :timesteps=>Int[], :f_evals=>Int[], :cost=>Float64[]);
run_benchmark!(results, t, symplectic_integrators, τs)
```
| | integrator | runtime | τ | EnergyError | timesteps | f_evals | cost |
|---|---|---|---|---|---|---|---|
| | String | Float64 | Float64 | Float64 | Int64 | Int64 | Float64 |
| 1 | VelocityVerlet | 58.1689 | 0.001 | 0.00366562 | 35000 | 70002 | 1.0 |
| 2 | PseudoVerletLeapfrog | 58.7572 | 0.001 | 0.0208884 | 35000 | 105002 | 1.0 |
| 3 | McAte2 | 57.1342 | 0.001 | 0.00980374 | 35000 | 105002 | 1.0 |
| 4 | Yoshida6 | 227.894 | 0.001 | 0.0219439 | 35000 | 525002 | 1.0 |
The cost of a timestep can be computed as follows:
```julia
c_symplectic .= results[!, :runtime] ./ results[!, :timesteps]
c_Verlet = c_symplectic[1]
c_symplectic /= c_Verlet
```
```
4-element Array{Float64,1}:
 1.0
 1.010114030346147
 0.982211928654614
 3.9178041722138084
```
where we have normalized the cost to the cost of a VelocityVerlet step.
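As a quick sanity check, the relative cost of Yoshida6 can be recovered directly from the warm-up runtimes above, since both runs take the same 35000 steps:

```julia
# Yoshida6 took 227.894 s and VelocityVerlet 58.1689 s for 35000 steps each,
# so the per-step cost ratio is simply the ratio of runtimes.
cost_ratio = 227.894 / 58.1689   # ≈ 3.92, matching c_symplectic[4]
```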
Let us now benchmark the solvers for a fixed simulation time and a variable timestep.
```julia
t = 45.0
τs = 10 .^range(-4, -3, length=10)

results = DataFrame(:integrator=>String[], :runtime=>Float64[], :τ=>Float64[],
    :EnergyError=>Float64[], :timesteps=>Int[], :f_evals=>Int[], :cost=>Float64[]);
run_benchmark!(results, t, symplectic_integrators, τs, c=c_symplectic)
```
| | integrator | runtime | τ | EnergyError | timesteps | f_evals | cost |
|---|---|---|---|---|---|---|---|
| | String | Float64 | Float64 | Float64 | Int64 | Int64 | Float64 |
| 1 | VelocityVerlet | 771.301 | 0.0001 | 0.00205374 | 450000 | 900002 | 1.0 |
| 2 | PseudoVerletLeapfrog | 758.32 | 0.000101011 | 0.00215171 | 445495 | 1336487 | 1.01011 |
| 3 | McAte2 | 773.276 | 9.82212e-5 | 0.000271679 | 458150 | 1374452 | 0.982212 |
| 4 | Yoshida6 | 786.325 | 0.00039178 | 0.00343872 | 114861 | 1722917 | 3.9178 |
| 5 | VelocityVerlet | 603.311 | 0.000129155 | 0.00346887 | 348419 | 696840 | 1.0 |
| 6 | PseudoVerletLeapfrog | 591.641 | 0.000130461 | 0.00349313 | 344931 | 1034795 | 1.01011 |
| 7 | McAte2 | 610.059 | 0.000126858 | 0.00336253 | 354729 | 1064189 | 0.982212 |
| 8 | Yoshida6 | 611.417 | 0.000506004 | 0.00176099 | 88933 | 1333997 | 3.9178 |
| 9 | VelocityVerlet | 468.343 | 0.00016681 | 0.00243857 | 269768 | 539538 | 1.0 |
| 10 | PseudoVerletLeapfrog | 452.927 | 0.000168497 | 0.00789853 | 267067 | 801203 | 1.01011 |
| 11 | McAte2 | 463.071 | 0.000163843 | 0.0015374 | 274654 | 823964 | 0.982212 |
| 12 | Yoshida6 | 470.527 | 0.000653529 | 0.00297477 | 68857 | 1032857 | 3.9178 |
| 13 | VelocityVerlet | 361.319 | 0.000215443 | 0.0108509 | 208872 | 417746 | 1.0 |
| 14 | PseudoVerletLeapfrog | 356.287 | 0.000217622 | 0.00485493 | 206781 | 620345 | 1.01011 |
| 15 | McAte2 | 361.206 | 0.000211611 | 0.00613404 | 212655 | 637967 | 0.982212 |
| 16 | Yoshida6 | 359.016 | 0.000844065 | 0.0677113 | 53314 | 799712 | 3.9178 |
| 17 | VelocityVerlet | 276.663 | 0.000278256 | 0.00496142 | 161722 | 323446 | 1.0 |
| 18 | PseudoVerletLeapfrog | 276.259 | 0.00028107 | 0.00207705 | 160103 | 480311 | 1.01011 |
| 19 | McAte2 | 285.025 | 0.000273306 | 0.000688371 | 164651 | 493955 | 0.982212 |
| 20 | Yoshida6 | 284.42 | 0.00109015 | 0.0185529 | 41279 | 619187 | 3.9178 |
| 21 | VelocityVerlet | 213.536 | 0.000359381 | 0.000369399 | 125216 | 250434 | 1.0 |
| 22 | PseudoVerletLeapfrog | 211.668 | 0.000363016 | 0.00558955 | 123962 | 371888 | 1.01011 |
| 23 | McAte2 | 218.872 | 0.000352989 | 0.00523221 | 127483 | 382451 | 0.982212 |
| 24 | Yoshida6 | 218.518 | 0.00140799 | 0.00593719 | 31961 | 479417 | 3.9178 |
| 25 | VelocityVerlet | 164.244 | 0.000464159 | 0.00704297 | 96950 | 193902 | 1.0 |
| 26 | PseudoVerletLeapfrog | 163.67 | 0.000468853 | 0.00314934 | 95979 | 287939 | 1.01011 |
| 27 | McAte2 | 171.925 | 0.000455902 | 0.00749259 | 98706 | 296120 | 0.982212 |
| 28 | Yoshida6 | 169.297 | 0.00181848 | 0.000755624 | 24746 | 371192 | 3.9178 |
| 29 | VelocityVerlet | 129.679 | 0.000599484 | 0.0267297 | 75065 | 150132 | 1.0 |
| 30 | PseudoVerletLeapfrog | 127.169 | 0.000605547 | 0.00661891 | 74313 | 222941 | 1.01011 |
| 31 | McAte2 | 130.723 | 0.000588821 | 0.0105314 | 76424 | 229274 | 0.982212 |
| 32 | Yoshida6 | 130.937 | 0.00234866 | 0.0323632 | 19160 | 287402 | 3.9178 |
| 33 | VelocityVerlet | 100.903 | 0.000774264 | 0.0220466 | 58120 | 116242 | 1.0 |
| 34 | PseudoVerletLeapfrog | 98.3957 | 0.000782095 | 0.0263143 | 57538 | 172616 | 1.01011 |
| 35 | McAte2 | 100.697 | 0.000760491 | 0.00945157 | 59173 | 177521 | 0.982212 |
| 36 | Yoshida6 | 103.34 | 0.00303341 | 0.033734 | 14835 | 222527 | 3.9178 |
| 37 | VelocityVerlet | 77.3016 | 0.001 | 0.0128268 | 45001 | 90004 | 1.0 |
| 38 | PseudoVerletLeapfrog | 75.6741 | 0.00101011 | 0.00695002 | 44550 | 133652 | 1.01011 |
| 39 | McAte2 | 78.5404 | 0.000982212 | 0.00132507 | 45815 | 137447 | 0.982212 |
| 40 | Yoshida6 | 80.9294 | 0.0039178 | 0.0533705 | 11487 | 172307 | 3.9178 |
The energy error as a function of runtime is given by
```julia
@df results plot(:EnergyError, :runtime, group=:integrator,
    xscale=:log10, yscale=:log10, xlabel="Energy error", ylabel="Runtime (s)")
```
Looking at the runtime as a function of timesteps, we can observe that we have a linear dependency for each method, and the slope is the previously computed cost per step.
```julia
@df results plot(:timesteps, :runtime, group=:integrator,
    xscale=:log10, yscale=:log10, xlabel="Number of timesteps", ylabel="Runtime (s)")
```
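The slope can also be recovered numerically. As an illustrative sketch (the `steps` and `times` vectors below are copied from the VelocityVerlet rows of the table above, and the fit assumes runtime is proportional to the number of steps):

```julia
# Least-squares slope of runtime vs. timesteps for VelocityVerlet,
# i.e. the per-step cost in seconds (cf. c_Verlet computed earlier).
steps = [450000.0, 348419.0, 269768.0, 208872.0, 161722.0]
times = [771.301, 603.311, 468.343, 361.319, 276.663]
slope = steps \ times   # ≈ 1.7e-3 s per step
```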
We can also consider a longer simulation time.
```julia
t = 100.0
τs = 10 .^range(-4, -3, length=5)

results = DataFrame(:integrator=>String[], :runtime=>Float64[], :τ=>Float64[],
    :EnergyError=>Float64[], :timesteps=>Int[], :f_evals=>Int[], :cost=>Float64[]);
#run_benchmark!(results, t, symplectic_integrators, τs, c=c_symplectic)
```
The energy error as a function of runtime is given by
```julia
#@df results plot(:EnergyError, :runtime, group=:integrator,
#    xscale=:log10, yscale=:log10, xlabel="Energy error", ylabel="Runtime (s)")
```
We can also look at the energy error history.
```julia
function benchmark(energyerr, rts, ts, t, configs)
    simulation = setup(t)
    prob = SecondOrderODEProblem(simulation)
    for config in configs
        alg = config.alg
        sol, rt = @timed solve(prob, alg(); progress=true,
            progress_name="$alg", config...)
        result = NBodySimulator.SimulationResult(sol, simulation)
        ΔE(t) = total_energy(result, t) - total_energy(result, 0)
        energyerr[alg] = [ΔE(t) for t in sol.t[2:end]]
        rts[alg] = rt
        ts[alg] = sol.t[2:end]
    end
end

ΔE = Dict()
rt = Dict()
ts = Dict()
configs = config(symplectic_integrators, c_symplectic, 2.3e-4)
benchmark(ΔE, rt, ts, 45., configs)

plt = plot(xlabel="Rescaled Time", ylabel="Energy error", legend=:bottomleft);
for c in configs
    plot!(plt, ts[c.alg], abs.(ΔE[c.alg]), label="$(c.alg), $(rt[c.alg])s", yscale=:log10)
end
plt
```